Notes: EFSPI 9th RegStat Workshop - Part A
EFSPI Meaning: EFSPI stands for the European Federation of Statisticians in the Pharmaceutical Industry. It’s a collective body representing statisticians involved in the pharmaceutical sector across Europe.
Foundation Year: EFSPI was founded in 1992, marking its establishment as an entity to organize and represent statisticians within the pharmaceutical industry at a European level.
Organizational Structure: EFSPI is described as an “umbrella” organization, indicating that it functions at a higher organizational level that encompasses various national groups rather than individual members. It is a non-profit organization, which means it does not operate to generate profit but rather to serve the interests of its members and the profession.
Federation Composition: The federation consists of national groups from different European countries, currently totaling 10. These groups collectively represent the interests of statisticians in the pharmaceutical industry at a national level.
Membership: It is noted that EFSPI does not have individual members. Instead, its structure is based on national organizations which collectively represent over 2000 members. This large number suggests a broad and significant representation within the pharmaceutical statistics community in Europe.
Representation on the EFSPI Council: Each national organization that is part of EFSPI has two members representing it on the EFSPI Council. This council likely serves as the governing body or decision-making panel for the federation.
Website: For more information on the federation's activities, publications, events, and initiatives, see www.efspi.org.
Chairs: Khadija Rantell (MHRA, UK) and Fredrik Öhrn (J&J, SE)
Speaker: Jenny Devenport (Roche, CH)
Robustness
- Definition: In statistics, robustness refers to the ability of statistical estimates to remain reliable even when the assumptions on which they are based are violated.
- Context: This property is crucial because, in real-world data analysis, assumptions such as normality, homogeneity of variance, or linearity often do not hold perfectly. Robust statistical methods can still provide valid results despite issues such as non-normality, heteroscedasticity (non-constant variance), or outliers in the data.
- Example: The median as a measure of central tendency. Unlike the mean, the median is not unduly affected by extreme values (outliers).
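The median example can be seen in a few lines of Python; the sample values are invented purely for illustration.

```python
from statistics import mean, median

# Hypothetical sample with one gross outlier (e.g. a data-entry error).
clean = [4.8, 5.1, 5.0, 4.9, 5.2]
contaminated = clean + [500.0]  # one extreme value

# The mean is dragged far away from the bulk of the data
# (from 5.0 up to roughly 87.5)...
print(mean(clean), mean(contaminated))
# ...while the median barely moves (from 5.0 to about 5.05).
print(median(clean), median(contaminated))
```

A single contaminated observation shifts the mean by more than an order of magnitude but leaves the median essentially unchanged, which is exactly what "robust to outliers" means here.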
Uncertainty
Three main drivers:
1. Patients are Waiting: There is a pressing need from patients who urgently need new treatments, creating a moral and social imperative to speed up the development and availability of drugs.
2. Healthcare Systems Need Effective Treatments: Effective treatments are essential to ease the burden of disease on healthcare systems, improving patient outcomes and reducing overall healthcare costs.
3. Sustainable Innovation in the Pharmaceutical Industry: Innovation must be sustainable, ensuring that new developments are both effective and financially viable over the long term. This includes not only creating new drugs but also ensuring they can be produced and distributed sustainably.
This slide details specific ways in which patient advocacy groups are influencing faster drug development:
Together, these points emphasize a multi-faceted approach to accelerating drug development, involving patients directly and addressing systemic needs for efficiency and sustainability in healthcare. These efforts aim to ensure that new treatments not only reach the market more quickly but do so in a way that is safe and addresses the real needs of patients.
Implications
This slide addresses a significant trend in pharmaceutical research and development (R&D) efficiency, encapsulated by what is known as "Eroom's Law." A breakdown of the concepts and issues follows:
Eroom's Law
- Definition: Eroom's Law is the observation that drug discovery is becoming slower and more expensive over time, despite improvements in technology and science. It is "Moore's Law" spelled backwards, referring to the opposite trend observed in the electronics industry, where technology becomes faster and cheaper.
- Trend Explanation: The graph shows a decline in R&D efficiency, measured as the number of new drug approvals per billion U.S. dollars spent. Despite increased investment in drug development, the output of successful new drugs has not increased proportionally.
Statisticians provide a framework for addressing complex questions that drug development entails. Their expertise in quantitative analysis, experimental design, and critical evaluation of data ensures that pharmaceutical companies can navigate the intricate process of bringing a new drug to market effectively. They play a pivotal role in ensuring that the drugs that reach the market are backed by robust evidence, ultimately influencing healthcare outcomes positively. This role is crucial in maintaining the integrity of the drug development process, especially under the pressures of speed and innovation that the industry faces.
Details
Integration of AI and ML in Drug Development - Technological Advances: Despite significant technological advancements in areas like combinatorial chemistry, DNA sequencing, and high-throughput screening, the expected surge in drug approvals hasn’t materialized. This discrepancy highlights the complex nature of translating scientific advances into practical, marketable therapies. - Role of AI and ML: These technologies are seen as potential game-changers in drug development. However, their effective integration requires a clear understanding of their capabilities and limitations. Statisticians and drug developers are encouraged to think critically about how AI and ML can be strategically deployed to improve various stages of drug development, from target identification to post-marketing surveillance.
Speaker: Kaspar Rufibach (CH)
Treatment Arms and Medications
Arm A (GClb): Combined RO5072759 (GA101), a type of monoclonal antibody, with chlorambucil, administered intravenously and orally, respectively.
Arm B (RClb): Combined rituximab, another monoclonal antibody, with chlorambucil.
Arm C (Clb): Chlorambucil alone, serving as the control group.
Involved 781 patients across multiple countries, including Germany, Australia, and the USA among others, highlighting its international scale and the broad applicability of its findings.
Enrollment spanned from April 2010 to July 2012, with the study concluding in August 2017. The results were published in February 2018.
The CLL11 trial’s results have had significant implications for the treatment of CLL, especially in patients with comorbid conditions. It supported the use of GA101 as a more effective treatment option, influencing treatment guidelines and practices worldwide.
The trial was pivotal in obtaining approval for new treatment protocols that offer better management of CLL, leading to it being marked as a breakthrough therapy and changing the standard of care for CLL patients.
The trial also contributed to the methodological discussion in clinical trials, particularly in the use of specialized statistical methods such as closed testing to handle multiplicity in data analysis, which ensures more robust and reliable conclusions.
Underutilization of Available Tools: The speaker believes that statisticians often don’t fully exploit their analytical toolbox, defaulting to standard two-arm randomized controlled trials (RCTs) without considering more innovative or suitable methodologies that could address specific research questions more effectively.
Stakeholder Engagement: Emphasizes the importance of convincing all stakeholders, not just regulators, about the value of a drug. This broader engagement is crucial for reducing time to market approval and reimbursement, which ultimately affects patient access to new therapies.
Operational Considerations: Discusses the operational challenges and biases that were encountered during the trial, highlighting the need for robust discussions with regulators and developers to ensure trial integrity and acceptance of the results.
Impact of the Trial: The trial led to the approval and reimbursement of a new treatment for chronic lymphocytic leukemia (CLL), marking it as a significant advancement in therapy. The trial’s design and outcomes were documented in clinical and statistical publications, although the speaker notes that these works are undercited and suggests they deserve more recognition.
Multiplicity and Closed Testing: The closed testing approach used in the trial is praised for its efficiency in dealing with multiplicity, a common challenge in clinical trials where multiple comparisons increase the risk of type I errors. Despite its success, the method has not been widely adopted, which the speaker attributes to a lack of awareness or understanding within the broader research community.
Encouragement for Broader Adoption: The speaker advocates for greater adoption of innovative statistical methods like closed testing in clinical trials to improve efficiency and outcomes.
Educational Push: Suggests that more educational efforts are needed to increase the visibility and understanding of such methods among statisticians and drug developers.
The discussion revolves around the complex dynamics of drug development, especially the goal of ensuring comprehensive satisfaction across all stakeholders: regulators, patients, physicians, and payers (HTA bodies). It touches on the nuanced challenges of achieving regulatory approval while also meeting market and clinical demands for clear, comparative effectiveness. A breakdown of the essential points:
Choosing the Right Strategy
The choice among these strategies depends on various factors:
- Efficiency: How quickly and efficiently can the trial achieve its objectives without compromising the integrity and reliability of the results?
- Resource Allocation: How are patient resources best utilized to yield meaningful data?
- Statistical Power: Which method provides enough power to detect real differences without excessive risk of type I errors?
- Regulatory and Stakeholder Expectations: What evidence do regulatory bodies and other stakeholders expect to support claims of efficacy and safety?
Efficiency: Closed testing where each comparison is analyzed once mature offers a balanced approach, providing relatively fast results for both regulatory approvals and satisfying HTA/patient concerns. It minimizes the delay usually associated with waiting for the last data point to mature (as seen in the last mature approach) and avoids the inefficiencies of running separate trials.
Statistical Power and Operational Bias: The approach with the least delay and operational bias while maintaining robust statistical power seems to be closed testing with each comparison analyzed once it matures. This strategy allows for a quicker response to emerging data without the stringent constraints of Bonferroni corrections.
Overall Consideration: A strategy that integrates closed testing with timely analysis of each mature comparison appears to be most advantageous. It aligns with regulatory timelines efficiently and addresses patient and HTA expectations more rapidly than other methods, thus facilitating a smoother and faster transition from trial completion to market availability.
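The general principle behind closed testing can be made concrete in a few lines. The sketch below (not the CLL11 implementation) rejects an elementary hypothesis H_i at familywise level alpha only if every intersection hypothesis containing it is rejected by a local Bonferroni test; with Bonferroni local tests this reproduces the Holm procedure. The p-values are invented.

```python
from itertools import combinations

def closed_test(pvals, alpha=0.05):
    """Closed testing with Bonferroni tests for every intersection hypothesis.

    H_i is rejected at familywise level `alpha` only if *every* intersection
    hypothesis containing H_i is rejected by its local Bonferroni test.
    """
    m = len(pvals)
    rejected = []
    for i in range(m):
        all_ok = True
        for k in range(1, m + 1):
            for subset in combinations(range(m), k):
                if i not in subset:
                    continue
                # Local Bonferroni test of this intersection hypothesis
                if min(pvals[j] for j in subset) > alpha / len(subset):
                    all_ok = False
        if all_ok:
            rejected.append(i)
    return rejected

# Invented one-sided p-values for three treatment comparisons:
print(closed_test([0.001, 0.03, 0.2]))  # only the first comparison survives
```

Despite testing every intersection, no alpha is "spent" beyond the overall level, which is why closed testing is efficient for multi-arm trials such as the one discussed above.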
Speakers: Karin Cerri and Lilla di Scala (J&J, CH)
Judith Anzures-Cabrera, Annabelle Monnet, and Alex Strasak (Roche)
An estimand is a precise description of what is being estimated from the data, which in this case assumes that death is the only intercurrent event (an event that occurs after treatment initiation and influences the interpretation of the primary outcome).
Endpoints are the primary outcomes used to judge the effectiveness of a treatment. - Primary Endpoint: Time to a progression event. This means the main measure of treatment effectiveness is how long it takes before the disease worsens. - Key Secondary Endpoint: A continuous measure that is on the same scale as the time to event, providing additional information on treatment effects.
Strategies for Handling Death as an Intercurrent Event
This refers to how the study addresses deaths that occur during the trial, as these can impact the assessment of treatment effectiveness.
- Time to Event (Composite): Death is considered a progression event. This approach categorizes any death as an endpoint, simplifying the analysis but possibly conflating death with other types of progression.
- Continuous (Hypothetical): A statistical model estimates what would have happened had the patient not died, predicting outcomes as if the patient had continued in the study. This accounts for deaths without counting them as a direct outcome measure.
Context: For the continuous endpoint, a hypothetical strategy was chosen, modelling what would have happened had the patient not died. Given the low mortality rate (1%), advice is sought on whether this approach adequately addresses the potential distortion of treatment effects due to death.
Key Points to Address: - Feasibility of the Hypothetical Model: Is this model considered robust and valid by regulatory standards, especially with such a low incidence of the intercurrent event (death)? - Impact on Treatment Effect Estimates: How might the low mortality rate influence the reliability and precision of the hypothetical estimand in measuring the continuous outcomes related to symptom progression?
James Bell (Elderbrook Solutions GmbH), Thomas Drury (GlaxoSmithKline), Tobias Mütze (Novartis Pharma AG), Christian Bressen Pipper (Novo Nordisk A/S), Marian Mitroiu (Biogen International GmbH), Khadija Rerhou Rantell (MHRA), Marcel Wolbers (Roche), David Wright (AstraZeneca)
Handling missing data in clinical trials is a critical issue, particularly when endpoints are not observed due to subjects dropping out or other intercurrent events (IE). Addressing this missing data appropriately is essential to ensure reliable and accurate estimation of treatment effects.
Common Approaches to Handling Missing Data
Key Findings from Work
Imputation Models Described:
1. Retrieved Dropout Models:
   - Assumptions: These models assume that missing data after an intercurrent event (IE) are similar to the observed data after such events within the same study arm.
   - Pros:
     - Minimal bias in realistic scenarios where the assumptions hold.
     - No assumptions are made about the treatment effect beyond what is observed.
   - Cons:
     - Inflated standard errors and loss of power when there are insufficient post-IE data.
     - Difficult or impossible to fit effectively when the post-IE data are sparse.
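The core assumption of a retrieved-dropout model can be caricatured in a few lines. This is only a sketch of the assumption: a real analysis would use multiple imputation from a fitted model rather than a single random draw, and all numbers below are invented.

```python
import random

random.seed(1)  # reproducible sketch

# Invented change-from-baseline values at the final visit, for one study arm.
on_treatment = [2.1, 1.8, 2.5, 1.9, 2.3]   # no intercurrent event (IE)
post_ie_observed = [0.9, 1.1, 0.7]         # had an IE but were still measured
n_missing_after_ie = 2                     # had an IE and were lost to follow-up

# Retrieved-dropout assumption: unobserved post-IE values behave like the
# *observed* post-IE values in the same arm, so draw from those.
imputed = [random.choice(post_ie_observed) for _ in range(n_missing_after_ie)]
completed = on_treatment + post_ie_observed + imputed
print(imputed, round(sum(completed) / len(completed), 2))
```

With only three retrieved dropouts to draw from, the sketch also makes the stated cons tangible: sparse post-IE data leave the imputation poorly informed and inflate uncertainty.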
Question 1: “When estimating a (primary) estimand that adopts a treatment policy strategy, would regulators accept an analysis approach that imputes the missing values using a reference-based imputation method if the assumptions of the imputation approach can be clinically justified?”
Context: This question concerns the regulatory acceptance of using a reference-based imputation method to handle missing data, particularly when such an approach aligns with the treatment policy strategy.
Key Considerations: - Regulatory Perspective: Regulators, like the FDA, generally require robust justification for any imputation technique used. The acceptability hinges on the clinical justification of the assumptions underlying the reference-based imputation method. - Clinical Justification: For a reference-based approach to be accepted, the assumptions about the similarity of missing data to the reference group must align with known clinical behaviors of the disease and treatment. It’s essential to demonstrate that the reference group is an appropriate and representative comparator.
Context: When it’s unclear which assumptions are most appropriate for the missing data, selecting an imputation model can be challenging.
Key Principles:
1. Type I Error Rate Control: Maintaining the integrity of statistical testing by controlling the false positive rate.
2. Conservative Bias for Treatment Effect: Preferring an underestimate of the treatment effect to avoid overstating the drug's efficacy.
3. Bias-Variance Trade-Off: Balancing the reduction of bias against the increase in variance, aiming to optimize the reliability of the estimate.
4. Clinical Plausibility: Ensuring that the imputation method produces results that are believable and align with clinical knowledge.
Priority Order:
- Clinical Plausibility: Often the top priority, because if the imputed data are not clinically plausible, the integrity of the entire analysis can be questioned.
- Type I Error Rate Control: Critical for regulatory approval, as controlling the false positive rate ensures that findings are statistically sound.
- Conservative Bias for Treatment Effect: Important in risk management, especially in clinical decisions where safety is paramount.
- Bias-Variance Trade-Off: Important but often secondary to the above considerations, as a methodologically sound approach can still yield useful results despite some trade-offs in bias and variance.
Additional Criteria: - Robustness to Assumptions: The selected imputation method should be robust to variations in the underlying assumptions about the missing data. - Transparency and Reproducibility: The method should be clear and reproducible, allowing other researchers to understand and replicate the findings.
Conclusion and Practical Application - Balance Between Statistical Rigor and Clinical Reality: The added discussion underscores the importance of balancing statistical rigor with clinical applicability. It suggests a dynamic approach where statistical methods are not only selected based on theoretical properties but also on how well they reflect and accommodate the realities of clinical practice. - Adaptive Strategies in Statistical Analysis: Given the complexity and variability in clinical trials, an adaptive approach to statistical analysis—responsive to emerging data and evolving clinical insights—is crucial. This approach ensures that the analysis remains relevant and robust across different stages of the clinical trial and subsequent regulatory review.
Fredrik Öhrn (J&J)
The slides discuss two major approaches to clinical trial design: the Two Trials Rule and the Pooled Trials Rule. These are strategies for structuring trials and analyzing data so that observed treatment effects are not only statistically significant but also unlikely to be due to chance.
Two Trials Rule
Concept:
- This rule requires two separate trials; each independently tests the treatment's efficacy against a control or placebo.
- The primary endpoint of each trial is tested with a one-sided significance test at \(p = 0.025\). Requiring success in both trials yields an overall false-positive probability of \(0.025^2 = 0.000625\) across the two trials.
Application: - This is a traditional approach in many therapeutic areas, especially where more robust evidence is needed due to variability in treatment response or criticality of the treatment outcomes. - Common in areas like oncology, rare diseases, and large cardiovascular trials.
Pros and Cons: - Pros: Provides independent replication of results, enhancing confidence in the treatment’s efficacy. - Cons: More resource-intensive; can be redundant if trials are highly consistent.
Pooled Trials Rule
Concept:
- A single larger trial is conducted instead of two smaller ones. The primary endpoint is tested at the level \(p = 0.025^2\) (i.e., 0.000625) to achieve the same overall type I error control as two separate trials.
- This approach assumes the total sample size of the single trial equals the combined sample size of two individual trials (2N).
Application: - This method is suited for scenarios where pooling data into one larger trial can provide a clearer, more comprehensive understanding of the treatment effects and allow for early stopping and other interim analyses.
Pros and Cons:
- Pros:
  - Potential for moderate power gains due to the larger sample size in a single analysis.
  - Facilitates early stopping and other interim adaptations, which can save resources if the trial can be concluded early.
  - Streamlines operations by focusing on a single trial setup.
- Cons:
  - Risks associated with basing decisions on a single trial, which could be more affected by an unusual outcome.
  - Concerns that it might motivate smaller, less robust study designs.
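The type I error arithmetic and the "moderate power gains" claim can be checked with a normal-approximation sketch. The effect size, standard deviation, and sample size below are invented planning numbers, and the z-test is a simplification of any real trial analysis.

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def power(n_per_arm, delta, sigma, alpha_one_sided):
    """Power of a one-sided two-sample z-test with n_per_arm per group."""
    se = sigma * sqrt(2.0 / n_per_arm)
    z_crit = nd.inv_cdf(1.0 - alpha_one_sided)
    return 1.0 - nd.cdf(z_crit - delta / se)

delta, sigma, n = 0.3, 1.0, 150  # invented planning assumptions

# Two Trials Rule: both trials (N per arm each) must succeed at one-sided 0.025.
p_two = power(n, delta, sigma, 0.025) ** 2

# Pooled Trials Rule: one trial with 2N per arm, tested at 0.025**2 = 0.000625.
p_pooled = power(2 * n, delta, sigma, 0.025 ** 2)

print(round(p_two, 3), round(p_pooled, 3))
```

Under these assumptions the pooled design comes out more powerful than requiring two separate successes, even though both control the same overall false-positive rate of \(0.025^2 = 0.000625\).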
The Proposal:
1. Two Separate Trials (each with sample size \(N\)): Traditionally, conducting two independent trials provides a robust means of validating the treatment's efficacy by allowing independent replication of the results. This method is particularly valuable in proving consistent effects across different study populations or conditions and is often used to satisfy regulatory requirements for drug approval.
2. One Single, Larger Trial (sample size \(2N\)): A single trial with the combined sample size, tested at the correspondingly stricter significance level described above.
The Question:
In which situations would it be appropriate to replace the two-trial approach (1) with a single, larger trial (2)?
Statistical Power: A single trial with a larger sample size (\(2N\)) may have greater statistical power to detect a treatment effect, especially if the effect size is small. This setup can be crucial in studies where the expected difference between treatment and control is subtle but clinically significant.
Resource Constraints: If resources are limited, conducting one larger trial might be more feasible than managing two separate trials. This can include financial constraints, limited patient populations (especially in rare diseases), or limited timeframes for study completion.
Regulatory and Strategic Considerations: If the regulatory body is open to innovative trial designs and the primary concern is demonstrating a strong, statistically significant result, one large trial might suffice, especially if combined with robust interim analyses that could further validate the findings without the need for a second trial.
Homogeneous Population: If the population is relatively homogeneous and external validity (generalizability across different populations) is less of a concern, a single trial might be more justified.
Interim Analyses and Early Stopping: Large single trials often facilitate the use of interim analyses, which can allow for early stopping for efficacy, thus potentially bringing effective treatments to patients sooner.
Phase 2 to 3
Summary of Feedback: The feedback converges on a few critical themes:
Marc Buyse and Samuel Salvaggio (One2Treat)
The slides outline the design and multiple testing procedure for a clinical trial in advanced cancer, involving three arms: two experimental (A and B) and one control (C).
Trial Arms:
Randomization: Patients are randomly assigned in a 1:1:1 ratio to one of the three arms.
Comparisons:
Outcomes:
Multiple Testing Procedure
The proposal suggests an alternative approach for a clinical trial design in advanced cancer, shifting from Progression-Free Survival (PFS) to prioritized outcomes such as Overall Survival (OS) and Time to Tumor Progression (TTP).
Alternative Approach: Generalized Pairwise Comparisons (GPC)
1. Shift in Primary Outcomes:
   - Time to Death (OS): Measures the duration from the start of the trial until death from any cause, providing a direct indication of a treatment's effect on survival.
   - Time to Tumor Progression (TTP): Tracks the time until the tumor shows signs of progression, which can be a more immediate and clear measure of a treatment's efficacy than PFS.
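The prioritized-outcome logic of GPC can be sketched directly. The illustration below is censoring-free with invented data and an invented 2-month clinically relevant margin; real GPC analyses (e.g. the net treatment benefit) must also handle censored observations.

```python
# Each hypothetical patient is (os_months, ttp_months), in priority order.
treated = [(24, 10), (18, 9), (30, 14)]
control = [(20, 8), (22, 12), (15, 7)]

MARGIN = 2  # months; smaller differences count as a tie (invented threshold)

def compare(a, b):
    """Prioritized pairwise comparison: OS first, then TTP if OS is a tie."""
    for pa, pb in zip(a, b):  # outcomes in priority order
        if pa - pb > MARGIN:
            return 1   # win for the treated patient
        if pb - pa > MARGIN:
            return -1  # loss
    return 0           # tie on all prioritized outcomes

# Every treated patient is compared with every control patient.
pairs = [(t, c) for t in treated for c in control]
wins = sum(1 for t, c in pairs if compare(t, c) == 1)
losses = sum(1 for t, c in pairs if compare(t, c) == -1)
net_benefit = (wins - losses) / len(pairs)
print(wins, losses, net_benefit)
```

The resulting net benefit (proportion of winning pairs minus losing pairs) is a single patient-centric summary across prioritized outcomes, which is the selling point of the GPC approach described above.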
Questions Regarding the Alternative Approach:
Complexity and Acceptability: The first question raises concerns about the complexity of explaining this approach and whether it would be acceptable to regulatory agencies. Given that regulatory bodies often require clear, robust evidence to support approval, the use of prioritized outcomes that directly reflect patient survival and disease progression may indeed be compelling. However, the shift from a more traditional endpoint like PFS to TTP and OS could require additional justification to demonstrate that these measures effectively capture the treatment benefits.
Feasibility of Implementation: The second question implicitly asks about the feasibility and practical challenges of implementing such a trial design. This involves considerations such as:
Kostas Sechidis, Mark Baillie, and Bjorn Bornkamp (Novartis)
The PPDAC framework, which stands for Problem, Plan, Data, Analysis, Conclusion, serves as a structured approach to ensure clarity, reliability, and replicability in data analysis and decision-making. This framework is rooted in statistical thinking and aims to guide researchers systematically through each phase of their investigation.
Problem (P): Identifying and pre-specifying the research questions or problems that need to be addressed. This involves defining the target estimand that the analysis will estimate.
Plan (P): Developing a comprehensive plan that aligns the questions with appropriate data collection and analysis strategies. This ensures that the data collected are suitable for addressing the research questions.
Data (D): Gathering and initially analyzing the data to understand the context, potential pitfalls, and the basic properties of the collected information.
Analysis (A): Implementing the analysis in a reproducible and accurate manner, ensuring that the methods and procedures used can be repeated and verified by other researchers.
Conclusion (C): Reporting the findings clearly and transparently, making it easy for other stakeholders to understand the results and the implications.
Application to Regulatory and Exploratory Analysis
Versatility of PPDAC: This cycle is applicable to both exploratory and confirmatory analyses, making it a robust framework for various phases of research, from initial hypothesis generation to final results confirmation.
Exploratory Assessments: Specifically, in the context of treatment effect heterogeneity, the PPDAC framework can help identify and explore variations in how different subgroups respond to a treatment within the exploratory phases of clinical research.
Purpose and Importance of Exploratory Analyses - Crucial Role in Drug Development: Exploratory analyses help address important questions that influence decisions on drug approval and labeling. These include identifying covariates for adjustment, assessing treatment effect heterogeneity, and conducting safety assessments. - Flexibility and Rigor: While these analyses offer the flexibility necessary for exploration and learning from data, they require increased self-discipline and rigor to ensure the outcomes are reliable and valid.
Challenges in Exploratory Analysis - Lack of Clear Guidelines: Unlike primary analyses, exploratory analyses often lack well-established guidelines, making the process less clear and potentially variable in quality. - Regulatory Interest and Comparisons: There is interest in creating structured guidelines similar to those in other fields such as pharmacometrics (Model Informed Drug Development, MIDD) and machine learning in medical devices (Good ML Practice for Medical Device Development). These frameworks provide structured approaches to integrating modeling and simulation into drug development and regulatory submissions.
The main question raised in the discussion pertains to the establishment of quality standards for exploratory analyses in drug development from a regulatory perspective. Specifically, the query seeks to open a dialogue on:
What are the quality standards for exploratory analyses?
This question aims to address the lack of clear, established guidelines for conducting exploratory analyses in drug development, which are crucial for making informed decisions about drug approval and labeling. The question underscores the need for a structured framework that ensures the reliability and validity of the results derived from such analyses.
Hong Sun (BMS)
Neoadjuvant → Surgery → Adjuvant: This is the typical sequence in many clinical trials involving surgery. Patients receive a neoadjuvant treatment (such as Drug X or standard of care (SoC)) before surgery. After surgery, patients continue with adjuvant treatment (either Drug X or SoC) during the post-surgical phase. The study also involves follow-up to monitor outcomes like recurrence and overall survival.
Radiologic Restaging: After the neoadjuvant treatment and before surgery, radiologic imaging is performed to assess how well the tumor has responded to the initial treatment. This information helps inform the surgical plan.
Study Design Options to Address the Contribution of Sequence:
Factorial Design or 3-arm Design: The FDA recommends using a factorial design or at least a 3-arm study design in peri-operative settings to clearly understand the effects of the adjuvant treatment and to address potential safety concerns like overdosing.
Factorial Design: In this design, all possible combinations of treatments are evaluated. For example, patients may receive Drug X both before and after surgery, Drug X only before surgery, Drug X only after surgery, or no Drug X (control or SoC group). This design helps isolate the specific contribution of each treatment period (neoadjuvant or adjuvant) to overall treatment efficacy.
3-Arm Design: In a simpler 3-arm design, patients are randomized into three groups:
This design allows researchers to compare the effects of different treatment sequences more straightforwardly.
FDA Recommendations:
Re-randomization involves randomizing participants again at a later stage of the trial, typically after completing an initial phase of treatment (e.g., after surgery or after neoadjuvant treatment). This method allows researchers to assess the impact of the subsequent treatment phases (such as adjuvant therapy) more clearly by reducing confounding factors and isolating the effects of different treatments or sequences.
One of the primary challenges in clinical trials with peri-operative settings is disentangling the effects of treatments administered at different times (neoadjuvant vs. adjuvant). By re-randomizing participants after a significant treatment milestone, like surgery, the study can separately evaluate the effects of each phase. For example: - Initial randomization could occur before surgery, where participants are assigned to different neoadjuvant treatments. - Re-randomization after surgery would then assign participants to different adjuvant treatments, allowing the effects of the adjuvant phase to be assessed separately from the neoadjuvant phase.
Re-randomization helps reduce potential biases that might arise if the effects of neoadjuvant treatments influence the outcomes of subsequent phases. By re-randomizing, the researchers can better control for the impact of the initial treatment phase and isolate the contribution of later phases (e.g., adjuvant therapy). This separation can clarify the sequence’s role in the overall treatment effect, offering a more rigorous evaluation of each phase’s contribution.
One of the concerns mentioned in the discussion is the challenge of underpowered studies, especially when it comes to understanding sequence contributions. By re-randomizing participants and collecting data on different treatment regimens at various stages, the trial may gain more statistical power to detect differences between sequences. This approach could lead to more robust conclusions about the most effective timing and combination of treatments.
Challenges and Considerations
While re-randomization offers potential benefits, it also introduces complexity to trial design:
- Sample size and logistics: The study may require a larger sample size to maintain sufficient power across multiple randomizations.
- Patient adherence: Participants might be less willing to adhere to treatment if they feel uncertain about being re-randomized after a significant treatment phase like surgery.
- Safety concerns: If there are long-term safety concerns related to one of the treatment phases, re-randomization might make it harder to fully assess the long-term effects of earlier treatments.
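The separation of phase effects described above can be illustrated with a minimal simulation, assuming a hypothetical trial in which patients are randomized twice and outcomes follow a simple additive model (all effect sizes and sample sizes below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # illustrative sample size

# Phase 1: randomize to neoadjuvant treatment A (1) vs B (0)
neo = rng.integers(0, 2, n)

# Phase 2: re-randomize, independently of phase 1, to adjuvant C (1) vs D (0)
adj = rng.integers(0, 2, n)

# Hypothetical additive outcome model: true neoadjuvant effect 0.5,
# true adjuvant effect 0.3, unit-variance noise (all values invented)
y = 0.5 * neo + 0.3 * adj + rng.normal(0, 1, n)

# Independent randomizations mean each phase effect is estimable
# by a simple between-arm contrast, unconfounded by the other phase
neo_effect = y[neo == 1].mean() - y[neo == 0].mean()
adj_effect = y[adj == 1].mean() - y[adj == 0].mean()
print(f"estimated neoadjuvant effect: {neo_effect:.2f}")
print(f"estimated adjuvant effect:    {adj_effect:.2f}")
```

Because the second randomization is independent of the first, each contrast is unconfounded by the other phase; in a real peri-operative trial the adjuvant comparison would typically be restricted to patients who actually reach re-randomization.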
Chairs: Andreas Brandt (BfArM, DE) and Vivian Lanius (Bayer, DE)
While the estimand framework has been implemented across trials, the speaker feels that its full potential has not been realized. Its true power lies in helping researchers and regulators focus on the core questions of what they want to learn about a drug, why they are conducting the trial, and how to best interpret the results for clinical practice. Using the framework to get to the heart of these questions is essential to maximizing its impact.
Speakers: Frank Bretz (Novartis, CH, virtual) and Rob Hemmings (Consilium, UK)
Historical Context and Acknowledgment: The presentation highlights how estimands have evolved over the last decade, with early discussions about how to incorporate real-world data into trial designs and how to handle intercurrent events. Over the years, these complex issues have been more widely acknowledged, and understanding of how to manage them in clinical trials has improved.
Data Analysis and Decision Making: The emphasis is on the fact that clinical trial analysis involves making choices about how to deal with intercurrent events and how to align those choices with the underlying clinical questions. The example given refers to how early discussions might have ignored data or not properly aligned the clinical and analytical decisions, but improvements have been made.
Regulatory Focus: Regulatory bodies, such as the FDA, have been involved in these conversations and highlight the importance of aligning strategies with clinical outcomes and patient retention. The estimation process and the strategies used must be transparent and justifiable.
Positive Reflection: The discussion reflects a positive development in how researchers are now tackling complex problems, such as intercurrent events, more openly and thoughtfully. There are still challenges, but the conversations are progressing, particularly in difficult-to-analyze datasets (like those involving early progression or deaths in neuroscience trials).
Objective:
To analyze whether the commonly used “treatment-policy strategy” in clinical trials truly reflects the underlying clinical question of interest, and to promote a broader, interdisciplinary conversation around estimand implementation.
The goal is to:
- Highlight key considerations: The slide aims to use examples (though non-exhaustive) to highlight critical points researchers and stakeholders should consider when evaluating if a particular strategy (in this case, the treatment-policy strategy) accurately represents the clinical question being investigated.
- Promote a panel and audience discussion: The intention is to spark a deeper conversation on the predominance of the treatment-policy strategy and whether it truly reflects what clinical researchers want to learn about new treatments or medicines.
- Encourage interdisciplinary involvement: The slide seeks to advocate for involving different disciplines in the estimand conversation, acknowledging that clinical trials are complex and may require input from multiple fields.
- Clarify the purpose of clinical trials: A broader reflection is posed about the purpose of clinical trials, asking the question: What do we truly want to learn about new medicines?
Key Discussion Points:
Explanation of treatment-policy strategy: In this strategy, the occurrence of intercurrent events (such as discontinuation of treatment or use of additional therapies) is considered irrelevant in defining the treatment effect. This approach is often based on statistical rationale rather than clinical reasoning.
Regulatory context: Regulatory advice often defaults to this strategy, where outcomes are analyzed as though the treatment continues, even after an intercurrent event. This can sometimes overlook the clinical context, focusing purely on statistical considerations.
Introduction to the Issue: The slide begins by presenting the issue that regulatory guidance often defaults to using the treatment-policy strategy. This strategy essentially assumes that certain events (such as discontinuation of treatment) are irrelevant, focusing more on a generalized statistical approach rather than the clinical reality of the situation.
Example to Illustrate: An example is provided (though not fully detailed in the explanation), involving the drug Mylotarg for AML (Acute Myeloid Leukemia). The aim of this example is to show how a treatment-policy strategy might fail to isolate the treatment's impact on survival while still suggesting the drug is beneficial based on broader statistical results. The slide prompts the audience to consider whether this approach truly answers the clinical question or whether more nuanced strategies would provide clearer insights.
The discussion around treatment-policy questions complicated by rescue medication deals with how rescue treatments (administered during a trial to manage symptoms) can affect the interpretation of trial results, especially when evaluating long-term outcomes and comparing the effects of an experimental drug against a control or placebo.
1. Challenges in Assessing Treatment Effects with Rescue Medication:
2. Scenario A – Treatment-Policy Strategy and Rescue Medication:
3. Scenario B – Interpretability to Prescribers:
4. Treatment-Policy Questions and Hypothetical Strategies:
5. Conservative vs. Hypothetical Approach:
6. Communication to Prescribers:
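The tension between treatment-policy and hypothetical strategies in the presence of rescue medication can be sketched with a small simulation. The outcome model, rescue rule, and effect sizes below are all invented; the hypothetical estimand is directly computable here only because the no-rescue outcomes are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

arm = rng.integers(0, 2, n)                    # 1 = experimental drug, 0 = control
y_no_rescue = 1.0 * arm + rng.normal(0, 1, n)  # outcome had rescue not been given

# Control patients doing poorly are more likely to receive rescue medication,
# which improves their observed outcome by +0.8 (all values illustrative)
rescue = (arm == 0) & (y_no_rescue < 0)
y_observed = y_no_rescue + 0.8 * rescue

# Treatment-policy estimand: compare arms on observed outcomes,
# regardless of whether rescue was used
tp = y_observed[arm == 1].mean() - y_observed[arm == 0].mean()

# Hypothetical estimand ("had rescue not been available") -- observable
# here only because this is a simulation
hyp = y_no_rescue[arm == 1].mean() - y_no_rescue[arm == 0].mean()

print(f"treatment-policy estimate: {tp:.2f}")   # attenuated by rescue in control
print(f"hypothetical estimate:     {hyp:.2f}")  # closer to the true effect of 1.0
```

The sketch shows why the treatment-policy estimate can be conservative when rescue disproportionately helps the control arm, and why its interpretation for prescribers differs from a hypothetical "no rescue available" question.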
Speaker: Miya Okada Paterniti (FDA, US, virtual)
Outline
• Benefits of productive interdisciplinary collaboration
• Challenges of implementing the estimand framework and discussing the clinical question of interest
• Impact of estimand framework on regulatory processes
The estimand framework aims to describe the treatment effect of interest in a way that aligns with the clinical question being addressed.
Clinicians play a vital role by ensuring that the estimand captures the relevant clinical effects and takes into account various patient experiences or “journeys” during the trial.
The goal is to incorporate intercurrent events, such as discontinuation of treatment or rescue medications, which could affect the assessment of the treatment effect. This ensures the estimand mirrors the actual clinical question and treatment objectives.
The overlapping diagram illustrates a shift from estimands being distinct from clinical questions to their integration, where the estimand fully reflects the clinical question.
Effective engagement between clinicians and statisticians is crucial to translate clinical questions into estimands.
Clinicians provide the clinical insights to formulate the appropriate questions of interest, while statisticians ensure these questions are measurable and well-defined in statistical terms.
Statisticians also define the estimand terminology, which lays the foundation for understanding the core objectives of the study, helping clinicians make sense of how statistical strategies are used to answer clinical questions.
This collaboration helps ensure that trial designs and analyses align with the intended outcomes and clinical relevance.
One of the biggest challenges in the estimand framework is the knowledge gap in terminology, particularly on the clinician side. Clinicians need to understand statistical terms, while statisticians need to grasp the clinical context.
Each therapeutic area presents unique challenges, requiring tailored approaches to handle intercurrent events (e.g., patient dropout, treatment changes).
A common struggle is implementing strategies (like composite outcomes) for continuous endpoints that account for negative outcomes.
The “Learn as We Go” approach encourages trial designs that include supplementary analyses to assess the effects of intercurrent event strategies. These lessons can guide future trials in similar therapeutic areas.
Clinicians are typically more familiar with certain strategies like the treatment-policy and composite strategies. Other strategies, like principal strata, are less familiar and require more explanation.
It is helpful to relate these unfamiliar strategies to more commonly understood concepts like the intent-to-treat (ITT) principle. For example, explaining treatment policy strategies in terms of ITT helps clinicians better understand how the framework works.
Another common issue is distinguishing between strategies for handling missing data and strategies for handling intercurrent events (ICEs). This distinction needs to be made clear early in the discussion to prevent misunderstandings.
Clinicians often respond better when ICE strategies are framed in clinical terms, so the communication process can be smoother when statistical terms are translated into clinical language.
This approach is essential in engaging clinicians more effectively in discussions about estimands, as it ties statistical concepts back to clinical relevance.
1. Treatment Policy Strategy:
- Clinical Question: “What is the effect of the test treatment compared to control, regardless of whether the intercurrent event occurred?”
- Example: If treatment discontinuation occurs (which is an intercurrent event), would you still want to know the effect of the drug compared to control, regardless of whether the patient continued or discontinued the treatment?
- Explanation: This approach helps determine the overall treatment effect, irrespective of whether patients had to discontinue the treatment. This could reflect a more general understanding of how the drug works in real-world practice, where some patients may discontinue treatment for various reasons.
2. Composite Strategy:
- Clinical Question: “What is the effect of the test treatment compared to control, reflecting the unfavorable nature of the intercurrent event on the interpretation of the outcome?”
- Example: If death was the intercurrent event, should it be reflected in the assessment? If most subjects died during the trial but the effect was only measured in survivors, would that accurately reflect the clinical question?
- Explanation: This strategy accounts for how unfavorable intercurrent events (like death) affect the outcome. It aims to provide a more accurate understanding of how the drug impacts survival or progression, even if the clinical situation worsens significantly.
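As a toy illustration of the composite strategy, the sketch below counts any patient with the intercurrent event as a non-responder, regardless of their measured score (the threshold, effect size, and ICE rate are all made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
arm = rng.integers(0, 2, n)  # 1 = test treatment, 0 = control

# Invented continuous improvement score; higher is better
score = 0.6 * arm + rng.normal(0, 1, n)
responder = score > 0.5  # clinical response threshold (made up)

# Intercurrent event (e.g., discontinuation or death) hits ~15% of patients
ice = rng.random(n) < 0.15

# Composite strategy: the ICE itself is an unfavorable outcome,
# so anyone experiencing it is counted as a non-responder
composite_resp = responder & ~ice

def rate(v, a):
    return v[arm == a].mean()

print(f"composite response rate, test:    {rate(composite_resp, 1):.2f}")
print(f"composite response rate, control: {rate(composite_resp, 0):.2f}")
```

This makes the contrast with a survivors-only analysis concrete: patients with the ICE are retained in the denominator with an unfavorable outcome rather than being excluded.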
Why This Matters:
Speaker: Laura Rodwell (Medicines Evaluation Board, NL)
1. Role of Assessors at the Medicines Evaluation Board:
2. Important Reports and Documents in the Process:
3. Challenges with the Estimand Framework:
The estimand framework was introduced as part of the ICH E9 (R1) guidelines to address intercurrent events and ensure that clinical trials focus on answering the correct clinical questions.
The statistical assessors were quicker to adopt the estimand framework, as it aligns well with their problem-solving approaches. However, clinical assessors had a steeper learning curve in integrating this framework into their assessments.
The role of the statistical assessor evolved into that of a trainer and facilitator, helping the clinical assessors understand the estimand approach. There was a need to ensure that clinical assessors saw the estimand as a tool to resolve issues and ensure alignment between clinical and statistical assessments.
The speaker highlights that while scientific advice was designed to discuss the estimand framework upfront, the integration of estimands into the assessment process was less clear and needed to be defined further as the process evolved.
The estimand framework’s implementation required collaboration between clinical and statistical teams.
The role of the statistical assessor has expanded to include educating clinical assessors on the estimand and ensuring its proper application in both the design and evaluation of clinical trials.
Early and continuous discussion of the estimand during scientific advice meetings is crucial to aligning clinical trials with the correct clinical questions.
Greg Levin from the FDA is emphasizing the need for clinical trials to be designed in a way that informs real-world treatment decisions for both prescribers and patients. He is advocating for aligning trial designs with actual clinical practice to improve their relevance and impact on public health.
Chairs: Kit Roes (Chair of MWP EMA, NL) and Emmanuel Zuber (Independent consultant, CH)
Speaker: Dr Duanduan Cong (Center for Drug Development-CDE NMPA, CN)
Outline
• Latest advancements in scientific regulatory system
• Navigating Challenges and Seizing Opportunities in a Changing Landscape
• Conclusion and outlook
NMPA, or National Medical Products Administration, is the primary regulatory authority in China responsible for the oversight of drugs, medical devices, and cosmetics. Its major function is to ensure the safety, efficacy, and quality of these products.
CDE is a key affiliated agency of the NMPA that is primarily responsible for the scientific evaluation and decision-making regarding drug applications. The main responsibilities of CDE are highlighted in the image and include:
Organizational Structure
The organizational chart in the image is divided into three main categories:
The transformation of China’s pharmaceutical industry and regulatory landscape reflects a strategic shift from primarily focusing on generic drugs to a balanced development of both innovative and generic medicines. This adjustment aligns with a broader commitment to enhance domestic capabilities in drug research and development (R&D) and to cater to global competitive pressures.
Shift in Focus: Since the reform in 2015, China’s drug R&D has transitioned from emphasizing generic drug production to fostering an environment that equally values innovation and imitation. This change is propelled by the need to compete globally and to meet internal demands for more advanced therapeutic solutions.
Growth in Drug Trials: The number of new drug clinical trials has seen a year-on-year increase, indicating a robust pipeline for innovative drugs. For example, from 2020 to 2023 there was a steady rise in the registration of new drug trials, showcasing heightened activity in drug development.
The largest proportion of these trials has been in the field of oncology, exceeding 40%, highlighting a significant focus on cancer therapies.
Diversity in R&D: The R&D pipeline is varied, targeting diseases across different systems such as endocrine, digestive, and mental health, along with critical areas like blood disorders and anticancer therapies. This diversity underpins the potential for launching multiple innovative drugs in the future.
Regulatory Enhancements
Scientific and Rational Framework: Ongoing refinements to drug regulatory laws aim to align more closely with global standards, ensuring a scientific and rational approach that maintains safety and efficacy throughout the drug development lifecycle.
Efficiency in Drug Approval: Reforms in the drug review and approval process have streamlined procedures, significantly reducing the time to market for new drugs while ensuring high safety standards. This not only accelerates patient access to new treatments but also improves the overall efficiency of the pharmaceutical sector.
Specialized Drug Categories: There is an increased focus on developing regulatory requirements for drugs targeting major diseases and special categories such as pediatric medications and narcotics. This ensures thorough review processes and enhances the safety and effectiveness of these drugs in clinical use.
Advanced Risk Management: The enhanced monitoring of drugs pre- and post-market includes a robust system for reporting adverse drug reactions, ensuring ongoing safety assessments throughout the drug’s lifecycle.
Digital Advancements: The adoption of digital tools and the development of national drug supervision platforms have enhanced the transparency and efficiency of regulatory processes, facilitating real-time data sharing and more effective oversight.
Focus on Clinical Value and Public Engagement: Drug regulations increasingly prioritize clinical value and public involvement, ensuring that the development and approval of new drugs meet actual clinical needs and engage public feedback, thereby aligning drug development with patient-centric outcomes.
Overall, these steps aim to modernize and make more effective the regulatory framework surrounding the development, approval, and monitoring of pharmaceuticals in response to evolving healthcare needs and technological advancements.
1. Special Review and Approval Procedure: - Activated during public health emergencies. - Facilitates quick approval of essential therapeutic and prophylactic drugs as mandated by law.
2. Priority Review and Approval Procedure: - Targets drugs with urgent clinical needs or those addressing major infectious diseases and rare diseases. - Includes innovative drugs, pediatric formulations, vaccines, and breakthrough therapy drugs. - Emphasizes drugs that can significantly improve clinical outcomes.
3. Breakthrough Therapy Designation (BTD): - Applies to drugs for serious life-threatening conditions that significantly improve upon existing therapies. - Requires strong evidence demonstrating substantial clinical benefits over existing treatments.
4. Conditional Approval Procedure: - Used for drugs treating serious diseases with no effective treatments. - Based on promising clinical trial data predicting the drug’s value. - Includes vaccines needed urgently for public health emergencies.
These procedures are designed to expedite the development, review, and approval of drugs that address critical and urgent health needs, enhancing the speed and responsiveness of the healthcare system to emerging challenges.
1. International Collaboration: - NMPA’s participation in the International Council for Harmonisation (ICH) since 2017. - Active involvement in setting global standards, having adopted all 69 ICH guidelines.
2. Implementation of ICH Guidelines: - Fast-tracked through targeted training and the publication of interpretative documents. - Enhances integration into drug R&D and approval processes.
3. Global Influence and Collaboration: - NMPA has strengthened global ties and influence in pharmaceutical regulations through active participation in ICH activities. - Focus on aligning domestic regulations with international standards to improve the global competitiveness of China’s pharmaceutical industry.
These efforts by the NMPA to align with ICH standards and expedite drug approvals underscore a commitment to integrating global best practices into China’s regulatory framework. This strategy not only improves the efficiency and efficacy of the drug approval process but also enhances China’s role in the global pharmaceutical landscape. These initiatives ensure that innovative treatments, particularly those with high clinical value, reach the market more quickly, benefiting patients both domestically and globally.
Provides a clear overview of the evolving landscape of innovative drug development, emphasizing the need for proactive strategies to stay ahead in a dynamic pharmaceutical sector.
As China continues to develop its innovation ecosystem, marked by a growing number of high-tech startups and research initiatives, it further solidifies its position as a global hub for pharmaceutical research and development. Despite facing several challenges, the convergence of these dynamic factors provides a fertile environment for groundbreaking advancements in drug development. By leveraging its strengths and strategically addressing remaining hurdles, China is poised to make significant contributions to global health, offering new solutions for pressing clinical needs.
To effectively advance the field of innovative drug development and meet the evolving demands of healthcare:
- Aligning Standards with Global Practices: It’s critical to align our drug review standards with international best practices. Active participation in global regulatory harmonization efforts will streamline processes and facilitate the development of new therapies worldwide.
- Accelerating Marketing of Urgent Drugs: Continuing to deepen reforms in our review and approval systems is essential. This will speed up the availability of clinically urgent drugs, particularly those targeting severe life-threatening conditions.
- Prioritizing Oversight and Quality: We must prioritize the oversight of innovative drugs and advanced therapies, focusing intensively on drug quality, efficacy, and safety to ensure public health and that new treatments meet the highest standards.
- Supporting Global R&D and Special Populations: Encouraging simultaneous global R&D efforts and fostering the development of drugs for special populations, such as those with rare diseases and pediatric conditions, will broaden the scope and impact of our pharmaceutical innovations.
- Enhancing Regulatory Science and Supervision: Committing to regulatory science research and exploring innovative supervision methodologies will enhance the efficiency and intelligence of drug oversight.
- Strengthening International Collaboration: Strengthening relationships with regulatory agencies worldwide and fostering collaboration among government, academia, and industry is crucial for promoting innovation and supporting the development of new drugs.
- Building a Diverse Talent Ecosystem: Creating a strong talent base and a diverse and dynamic ecosystem for drug innovation will further propel our progress.
Speaker: Dr Jianhong Pan (CDE, CN)
Speaker: Prof Hou (Peking University, CN)
Outline
Strengthening Regulatory Frameworks
AI and Advanced Technologies Integration
The slide titled “Focus on Innovation-Driven Regulation” highlights China’s strategic direction in regulatory science, specifically how it facilitates the integration of innovative therapies and medical devices.
Accelerating Processes: AI is significantly enhancing the efficiency of the drug discovery and development process. By utilizing machine learning models and algorithms, AI can analyze vast datasets to identify potential drug candidates faster than traditional methods. Additionally, AI can optimize clinical trial design and predict outcomes, thereby streamlining the development process and reducing time to market.
Real-Time Monitoring: AI-powered systems are being employed to monitor adverse drug reactions (ADRs) in real time. These systems can process large volumes of data from clinical trials and real-world use, detecting patterns and signals that may indicate potential safety issues. This proactive surveillance helps in ensuring that drugs remain safe for the public post-approval, enhancing patient safety and compliance.
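For context on what "detecting patterns and signals" means at its simplest, a classical (non-AI) disproportionality statistic such as the proportional reporting ratio (PRR) compares how often an event is reported with the drug of interest versus with all other drugs; the counts below are invented:

```python
# Proportional reporting ratio (PRR): a classical disproportionality
# statistic for adverse-event report databases (all counts invented)
a = 40     # reports: drug of interest, event of interest
b = 960    # reports: drug of interest, other events
c = 200    # reports: all other drugs, event of interest
d = 48800  # reports: all other drugs, other events

prr = (a / (a + b)) / (c / (c + d))
print(f"PRR = {prr:.1f}")  # values well above 1 flag a potential signal
```

AI-based surveillance systems build on the same idea, combining many such signals with additional data sources and models rather than a single 2x2 ratio.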
AI-Driven Precision Medicine
Supporting Personalized Medicine: AI plays a crucial role in the advancement of personalized medicine, where treatments are tailored to the individual characteristics of each patient. Through AI algorithms, it is possible to analyze genetic information, lifestyle data, and clinical histories to devise personalized treatment plans that are more effective and have fewer side effects. AI-driven diagnostics also contribute by identifying diseases with greater accuracy and at earlier stages.
Overall Impact
Challenges and Solutions
Data Privacy and Security
Talent and Infrastructure
Algorithmic Transparency
Flexibility and Cost Reduction: By adopting new scientific designs and utilizing AI, the goal is to increase flexibility in regulatory processes and reduce development costs. This approach leverages advanced analytical capabilities of AI to handle complex data efficiently, potentially streamlining regulatory pathways and reducing time-to-market for new therapies.
Real-World Evidence Utilization: There’s a significant focus on integrating RWE into the regulatory framework. RWE involves data gathered from real-world medical settings—outside of conventional clinical trials—which can offer insights into how treatments perform in broader, more diverse populations. AI tools are employed to analyze this healthcare data to generate actionable insights that inform regulatory decisions, enhancing the relevance and applicability of regulatory assessments.
ICH E17 Implementations and Workshops: The integration of the ICH E17 guidelines into China’s regulatory framework has been a focal point of recent efforts. Workshops that bring together academic experts, industry professionals, and regulators have been crucial. These workshops discuss the implementation challenges and solutions regarding the guidelines, facilitating a collaborative approach to regulatory innovation.
Academic and Industry Collaboration: Experts from top universities and biotech companies are engaged to develop robust statistical frameworks and ensure practical, viable development models. This collaboration helps align academic research with industrial practices and regulatory standards.
Development of Industry Blue Books and Consensus Reports: Outputs from these collaborations include industry blue books and consensus reports, which aim to guide the pharmaceutical industry and regulatory bodies. These documents compile shared insights and provide comprehensive guidelines for effectively navigating the complexities introduced by the new regulatory frameworks like ICH E17.
Addressing Implementation Issues: The discussions and collaborations aim to address and resolve practical issues encountered with the implementation of ICH E17 in China. By identifying these challenges and working through consensus-building activities, the goal is to create adaptable solutions that can be integrated into the regulatory practices.
Special Considerations and Statistical Methods
Addressing Unique Challenges: The approach takes into account special considerations such as non-majority trials, multiple primary endpoints, interim analyses, delayed effects, and adaptive designs. These considerations are crucial for trials involving complex diseases or treatments that require flexible and adaptive trial designs.
Statistical and Methodological Development: There is a focus on developing statistical methods that can handle the complexities of multi-regional clinical trials (MRCTs), especially in understanding and addressing inconsistencies across different regions. This also involves predicting long-term outcomes based on early trial data, which is particularly challenging in diseases like gastrointestinal (GI) cancers prevalent in China.
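One commonly discussed way to examine regional consistency in an MRCT is to check whether each region's estimated effect retains some fraction of the overall effect (a criterion along the lines of Japan's MHLW guidance). A minimal sketch on simulated data, with all sample sizes and effects invented:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated MRCT with a common true treatment effect of 0.4 (illustrative)
regions = {"Asia": 300, "Europe": 400, "Americas": 300}

effects, ys, ts = {}, [], []
for name, n in regions.items():
    t = rng.integers(0, 2, n)              # 1 = treatment, 0 = control
    y = 0.4 * t + rng.normal(0, 1, n)      # regional outcomes
    effects[name] = y[t == 1].mean() - y[t == 0].mean()
    ys.append(y)
    ts.append(t)

y_all, t_all = np.concatenate(ys), np.concatenate(ts)
overall = y_all[t_all == 1].mean() - y_all[t_all == 0].mean()

# One commonly discussed consistency criterion: each region should
# retain at least a fraction pi of the overall estimated effect
pi = 0.5
for name, eff in effects.items():
    print(f"{name}: effect {eff:.2f}, retains >= {pi}x overall: {eff >= pi * overall}")
```

In practice such checks are complicated by the issues the notes list (non-majority trials, multiple endpoints, interim analyses, delayed effects), which is why dedicated methodology is needed rather than this naive contrast.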
Speaker: Dr Xiaoni Liu (Novartis, CN)
Regulatory Reforms in China: She discusses the significant regulatory reforms that have taken place in China since 2015, which were aimed at enhancing the efficiency and transparency of the drug review and approval process. These reforms include the adoption of new laws and regulations that streamline the approval processes for clinical trials and new drugs.
Adoption of ICH Guidelines: A pivotal aspect of these reforms was China joining the ICH (International Council for Harmonisation) in 2017, aligning China’s drug regulations with global standards. The implementation of ICH guidelines has been a key part of her discussions, illustrating how these guidelines have been integrated into the Chinese regulatory framework.
ICH E17 and MRCT Blue Book Project: She highlights her involvement in projects like the ICH E17 MRCT (Multi-Regional Clinical Trials) Blue Book project team, which collaborates with various stakeholders to ensure effective implementation of these guidelines in China.
Enhanced Drug Development and Approval Processes: The reforms have allowed for more rapid development and approval of new drugs in China, reducing the previous delays that could extend up to four or five years for bringing innovative drugs to the Chinese market.
Simultaneous Development and Submission: The regulatory changes have facilitated simultaneous development and submission strategies, making MRCT the preferred approach and significantly shortening review timelines for drugs in China.
The efforts to harmonize China’s regulatory practices with global standards through the ICH E17 MRCT Blue Book project exemplify a comprehensive and collaborative approach. This initiative not only enhances the international compatibility of China’s drug approval processes but also ensures that these processes are scientifically rigorous and tailored to meet the specific needs of the Chinese population as well as global diversity in clinical practices. This strategic alignment is crucial for facilitating faster and more efficient drug approvals, ultimately leading to better patient outcomes.
Not defined in the Blue Book; this will be discussed in the next version.